Results 1 - 20 of 51
1.
bioRxiv ; 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38586037

ABSTRACT

Hearing-impaired listeners struggle to understand speech in noise, even when using cochlear implants (CIs) or hearing aids. Successful listening in noisy environments depends on the brain's ability to organize a mixture of sound sources into distinct perceptual streams (i.e., source segregation). In normal-hearing listeners, temporal coherence of sound fluctuations across frequency channels supports this process by promoting grouping of elements belonging to a single acoustic source. We hypothesized that reduced spectral resolution, a hallmark of both electric/CI hearing (from current spread) and acoustic hearing with sensorineural hearing loss (from broadened tuning), degrades segregation based on temporal coherence. This is because reduced frequency resolution decreases the likelihood that a single sound source dominates the activity driving any specific channel; concomitantly, it increases the correlation in activity across channels. Consistent with our hypothesis, predictions from a physiologically plausible model of temporal-coherence-based segregation suggest that CI current spread reduces comodulation masking release (CMR; a correlate of temporal-coherence processing) and speech intelligibility in noise. These predictions are consistent with our behavioral data with simulated CI listening. Our model also predicts smaller CMR with increasing levels of outer-hair-cell damage. These results suggest that reduced spectral resolution relative to normal hearing impairs temporal-coherence-based segregation and speech-in-noise outcomes.
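The core idea here, that broader peripheral filters let several sources leak into each channel and thereby raise cross-channel correlation, can be illustrated with a toy simulation. The two-source mixing weights below are illustrative assumptions, not the paper's physiological model:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
# Two independent source envelopes (e.g., a target and a masker).
src1 = [random.random() for _ in range(2000)]
src2 = [random.random() for _ in range(2000)]

def cross_channel_corr(spread):
    """Correlation between two frequency channels; `spread` is the
    fraction of the 'wrong' source leaking into each channel
    (0 = perfectly sharp tuning, larger = broader filters)."""
    ch1 = [(1 - spread) * a + spread * b for a, b in zip(src1, src2)]
    ch2 = [spread * a + (1 - spread) * b for a, b in zip(src1, src2)]
    return pearson(ch1, ch2)

sharp = cross_channel_corr(0.05)  # narrow tuning: channels stay source-specific
broad = cross_channel_corr(0.40)  # broad tuning (e.g., CI current spread)
assert broad > sharp  # broader tuning -> higher cross-channel correlation
```

With sharp tuning each channel is dominated by one source and the channels decorrelate; with broad tuning both channels mix both sources and their correlation rises, which is the degradation of the temporal-coherence grouping cue the abstract describes.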

2.
J Assoc Res Otolaryngol ; 25(1): 35-51, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38278969

ABSTRACT

PURPOSE: Frequency selectivity is a fundamental property of the peripheral auditory system; however, the invasiveness of auditory nerve (AN) experiments limits its study in the human ear. Compound action potentials (CAPs) associated with forward masking have been suggested as an alternative to assess cochlear frequency selectivity. Previous methods relied on an empirical comparison of AN and CAP tuning curves in animal models, arguably not taking full advantage of the information contained in forward-masked CAP waveforms. METHODS: To improve the estimation of cochlear frequency selectivity based on the CAP, we introduce a convolution model to fit forward-masked CAP waveforms. The model generates masking patterns that, when convolved with a unitary response, can predict the masking of the CAP waveform induced by Gaussian noise maskers. Model parameters, including those characterizing frequency selectivity, are fine-tuned by minimizing waveform prediction errors across numerous masking conditions, yielding robust estimates. RESULTS: The method was applied to click-evoked CAPs at the round window of anesthetized chinchillas using notched-noise maskers with various notch widths and attenuations. The estimated quality factor Q10 as a function of center frequency is shown to closely match the average quality factor obtained from AN fiber tuning curves, without the need for an empirical correction factor. CONCLUSION: This study establishes a moderately invasive method for estimating cochlear frequency selectivity with potential applicability to other animal species or humans. Beyond the estimation of frequency selectivity, the proposed model proved to be remarkably accurate in fitting forward-masked CAP responses and could be extended to study more complex aspects of cochlear signal processing (e.g., compressive nonlinearities).
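The convolution idea, that the CAP is a latency histogram of neural discharges convolved with a unitary response, and that forward masking attenuates the discharge pattern, can be sketched as follows. The unitary-response shape, firing histogram, and flat 40% attenuation are hypothetical stand-ins for the frequency-specific quantities the paper actually fits:

```python
import math

def convolve(x, h):
    """Discrete linear convolution (full length), pure Python."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

fs = 10000.0  # Hz, assumed sampling rate
# Hypothetical unitary response: a damped oscillation per neural discharge.
ur = [math.exp(-i / fs / 0.001) * math.sin(2 * math.pi * 1000 * i / fs)
      for i in range(40)]
# Hypothetical latency histogram of AN discharges to the click.
firing = [math.exp(-((i / fs - 0.002) ** 2) / (2 * 0.0005 ** 2))
          for i in range(60)]

# Forward masking modeled as a per-latency attenuation of firing
# (here a flat 40% reduction; the paper fits masking patterns that
# depend on masker notch width and attenuation).
masked_firing = [0.6 * f for f in firing]

cap_unmasked = convolve(firing, ur)
cap_masked = convolve(masked_firing, ur)
# Sanity check on linearity: uniform masking scales the whole waveform.
assert all(abs(m - 0.6 * u) < 1e-9 for m, u in zip(cap_masked, cap_unmasked))
```

In the paper the interesting structure is in how the masking pattern varies with masker condition; fitting those patterns across many conditions is what yields the frequency-selectivity estimates.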


Subjects
Cochlea, Cochlear Nerve, Animals, Humans, Action Potentials, Round Window, Chinchilla
3.
J Neurosci Methods ; 398: 109954, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37625650

ABSTRACT

BACKGROUND: Disabling hearing loss affects nearly 466 million people worldwide (World Health Organization). The auditory brainstem response (ABR) is the most common non-invasive clinical measure of evoked potentials, e.g., as an objective measure for universal newborn hearing screening. In research, the ABR is widely used for estimating hearing thresholds and cochlear synaptopathy in animal models of hearing loss. The ABR contains multiple waves representing neural activity across different peripheral auditory pathway stages, which arise within the first 10 ms after stimulus onset. Multi-channel EEG caps (e.g., 32 or more channels) provide robust measures for a wide variety of EEG applications in the study of human hearing. However, translational studies using preclinical animal models typically rely on only a few subdermal electrodes. NEW METHOD: We evaluated the feasibility of a 32-channel rodent EEG mini-cap for improving the reliability of ABR measures in chinchillas, a common model of human hearing. RESULTS: After confirming initial feasibility, a systematic experimental design tested five potential sources of variability inherent to the mini-cap methodology. We found that each source of variance minimally affected mini-cap ABR waveform morphology, thresholds, and wave-1 amplitudes. COMPARISON WITH EXISTING METHOD: The mini-cap methodology was statistically more robust and less variable than the conventional subdermal-needle methodology, most notably when analyzing ABR thresholds. Additionally, fewer repetitions were required to produce a robust ABR response when using the mini-cap. CONCLUSIONS: These results suggest the EEG mini-cap can improve translational studies of peripheral auditory evoked responses. Future work will evaluate the potential of the mini-cap to improve the reliability of more centrally evoked (e.g., cortical) EEG responses.


Subjects
Deafness, Hearing Loss, Animals, Newborn, Humans, Auditory Brain Stem Evoked Potentials/physiology, Chinchilla, Noise, Reproducibility of Results, Auditory Threshold/physiology, Hearing Loss/diagnosis, Electroencephalography, Acoustic Stimulation
4.
Sci Rep ; 13(1): 10216, 2023 06 23.
Article in English | MEDLINE | ID: mdl-37353552

ABSTRACT

Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring the intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index different neural processes that together support complex listening.
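A minimal sketch of the kind of trial-wise band-power measure involved: extracting alpha (7-15 Hz) and beta (13-30 Hz) power from one EEG "trial" with a plain discrete Fourier transform. The synthetic signal and sampling rate are assumptions for illustration; the study's actual pipeline (induced power, per-trial statistical modeling) is more involved:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared DFT magnitude over bins whose frequency lies in [f_lo, f_hi]."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(s * math.cos(-2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(s * math.sin(-2 * math.pi * k * i / n) for i, s in enumerate(signal))
            total += (re * re + im * im) / n
            count += 1
    return total / max(count, 1)

fs = 250.0  # Hz, an assumed EEG sampling rate
# Synthetic 1-s trial: a 10 Hz (alpha-band) component stronger than 20 Hz (beta).
trial = [2.0 * math.sin(2 * math.pi * 10 * i / fs)
         + 0.5 * math.sin(2 * math.pi * 20 * i / fs)
         for i in range(int(fs))]

alpha = band_power(trial, fs, 7, 15)   # parieto-occipital alpha band
beta = band_power(trial, fs, 13, 30)   # frontal beta band
assert alpha > beta
```

Per-trial scalars like `alpha` and `beta` are the sort of predictors one would then relate to trial-wise percent-correct scores, e.g., in a regression with both terms entered to test for independent contributions.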


Subjects
Speech Intelligibility, Speech Perception, Speech Perception/physiology, Noise, Auditory Perception, Electroencephalography
5.
bioRxiv ; 2023 May 22.
Article in English | MEDLINE | ID: mdl-36712081

ABSTRACT

Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring the intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index different neural processes that together support complex listening.

6.
Hear Res ; 426: 108586, 2022 12.
Article in English | MEDLINE | ID: mdl-35953357

ABSTRACT

Listeners with sensorineural hearing loss (SNHL) have substantial perceptual deficits, especially in noisy environments. Unfortunately, speech-intelligibility models have limited success in predicting the performance of listeners with hearing loss. A better understanding of the various suprathreshold factors that contribute to neural-coding degradations of speech in noisy conditions will facilitate better modeling and clinical outcomes. Here, we highlight the importance of one physiological factor that has received minimal attention to date, termed distorted tonotopy, which refers to a disruption in the mapping between acoustic frequency and cochlear place that is a hallmark of normal hearing. More so than commonly assumed factors (e.g., threshold elevation, reduced frequency selectivity, diminished temporal coding), distorted tonotopy severely degrades the neural representations of speech (particularly in noise) in single- and across-fiber responses in the auditory nerve following noise-induced hearing loss. Key results include: 1) effects of distorted tonotopy depend on stimulus spectral bandwidth and timbre, 2) distorted tonotopy increases across-fiber correlation and thus reduces information capacity to the brain, and 3) its effects vary across etiologies, which may contribute to individual differences. These results motivate the development and testing of noninvasive measures that can assess the severity of distorted tonotopy in human listeners. The development of such noninvasive measures of distorted tonotopy would advance precision-audiological approaches to improving diagnostics and rehabilitation for listeners with SNHL.


Subjects
Noise-Induced Hearing Loss, Sensorineural Hearing Loss, Speech Perception, Humans, Noise-Induced Hearing Loss/diagnosis, Speech Intelligibility, Speech Perception/physiology, Sensorineural Hearing Loss/diagnosis, Noise/adverse effects, Auditory Threshold/physiology
7.
Commun Biol ; 5(1): 733, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35869142

ABSTRACT

Animal models suggest that cochlear afferent nerve endings may be more vulnerable than sensory hair cells to damage from acoustic overexposure and aging. Because neural degeneration without hair-cell loss cannot be detected in standard clinical audiometry, whether such damage occurs in humans is hotly debated. Here, we address this debate through coordinated experiments in at-risk humans and a wild-type chinchilla model. Cochlear neuropathy leads to large and sustained reductions of the wideband middle-ear muscle reflex in chinchillas. Analogously, human wideband reflex measures revealed distinct damage patterns in middle age, and in young individuals with histories of high acoustic exposure. Analysis of an independent large public dataset and additional measurements using clinical equipment corroborated the patterns revealed by our targeted cross-species experiments. Taken together, our results suggest that cochlear neural damage is widespread even in populations with clinically normal hearing.


Subjects
Cochlea, Auditory Hair Cells, Acoustic Stimulation, Animals, Chinchilla, Auditory Hair Cells/physiology, Hearing, Humans, Middle Aged
8.
J Neurosci ; 42(8): 1477-1490, 2022 02 23.
Article in English | MEDLINE | ID: mdl-34983817

ABSTRACT

Listeners with sensorineural hearing loss (SNHL) struggle to understand speech, especially in noise, despite audibility compensation. These real-world suprathreshold deficits are hypothesized to arise from degraded frequency tuning and reduced temporal-coding precision; however, peripheral neurophysiological studies testing these hypotheses have been largely limited to in-quiet artificial vowels. Here, we measured single auditory-nerve-fiber responses to a connected speech sentence in noise from anesthetized male chinchillas with normal hearing (NH) or noise-induced hearing loss (NIHL). Our results demonstrated that temporal precision was not degraded following acoustic trauma, and furthermore that sharpness of cochlear frequency tuning was not the major factor affecting impaired peripheral coding of connected speech in noise. Rather, the loss of cochlear tonotopy, a hallmark of NH, contributed the most to both consonant-coding and vowel-coding degradations. Because distorted tonotopy varies in degree across etiologies (e.g., noise exposure, age), these results have important implications for understanding and treating individual differences in speech perception for people suffering from SNHL.

SIGNIFICANCE STATEMENT: Difficulty understanding speech in noise is the primary complaint in audiology clinics and can leave people with sensorineural hearing loss (SNHL) suffering from communication difficulties that affect their professional, social, and family lives, as well as their mental health. We measured single-neuron responses from a preclinical SNHL animal model to characterize salient neural-coding deficits for naturally spoken speech in noise. We found that the major mechanism affecting neural coding was not a commonly assumed factor, but rather a disruption of tonotopicity, the systematic mapping of acoustic frequency to cochlear place that is a hallmark of normal hearing. Because the degree of distorted tonotopy varies across hearing-loss etiologies, these results have important implications for precision audiology approaches to the diagnosis and treatment of SNHL.


Subjects
Noise-Induced Hearing Loss, Sensorineural Hearing Loss, Speech Perception, Acoustic Stimulation/methods, Animals, Auditory Threshold/physiology, Sensorineural Hearing Loss/etiology, Humans, Male, Noise, Speech, Speech Perception/physiology
9.
J Neurosci ; 42(2): 240-254, 2022 01 12.
Article in English | MEDLINE | ID: mdl-34764159

ABSTRACT

Temporal coherence of sound fluctuations across spectral channels is thought to aid auditory grouping and scene segregation. Although prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (i.e., the first auditory region supporting cross-channel processing over a wide frequency range). Accordingly, we hypothesized that aspects of temporal-coherence processing that could be realized in early auditory areas may shape speech understanding in noise. We then explored whether physiologically plausible computational models could account for results from a behavioral experiment that measured consonant categorization in different masking conditions. We tested whether within-channel masking of target-speech modulations predicted consonant confusions across the different conditions and whether predictions were improved by adding across-channel temporal-coherence processing mirroring the computations known to exist in the cochlear nucleus. Consonant confusions provide a rich characterization of error patterns in speech categorization, and are thus crucial for rigorously testing models of speech perception; however, to the best of our knowledge, they have not been used in prior studies of scene analysis. We find that within-channel modulation masking can reasonably account for category confusions, but that it fails when temporal fine structure cues are unavailable. However, the addition of across-channel temporal-coherence processing significantly improves confusion predictions across all tested conditions. Our results suggest that temporal-coherence processing strongly shapes speech understanding in noise and that physiological computations that exist early along the auditory pathway may contribute to this process.

SIGNIFICANCE STATEMENT: Temporal coherence of sound fluctuations across distinct frequency channels is thought to be important for auditory scene analysis. Prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, and it was unknown whether speech understanding in noise may be shaped by across-channel processing that exists in earlier auditory areas. Using physiologically plausible computational modeling to predict consonant confusions across different listening conditions, we find that across-channel temporal coherence contributes significantly to scene analysis and speech perception and that such processing may arise in the auditory pathway as early as the brainstem. By virtue of providing a richer characterization of error patterns not obtainable with just intelligibility scores, consonant confusions yield unique insight into scene analysis mechanisms.


Subjects
Auditory Pathways/physiology, Auditory Perception/physiology, Cochlea/physiology, Speech/physiology, Acoustic Stimulation, Auditory Threshold/physiology, Humans, Neurological Models, Perceptual Masking
10.
J Acoust Soc Am ; 150(5): 3581, 2021 11.
Article in English | MEDLINE | ID: mdl-34852572

ABSTRACT

A difference in fundamental frequency (F0) between two vowels is an important segregation cue prior to identifying concurrent vowels. To understand the effects of this cue on identification due to age and hearing loss, Chintanpalli, Ahlstrom, and Dubno [(2016). J. Acoust. Soc. Am. 140, 4142-4153] collected concurrent vowel scores across F0 differences for younger adults with normal hearing (YNH), older adults with normal hearing (ONH), and older adults with hearing loss (OHI). The current modeling study predicts these concurrent vowel scores to understand age and hearing loss effects. The YNH model cascaded the temporal responses of an auditory-nerve model from Bruce, Erfani, and Zilany [(2018). Hear. Res. 360, 40-45] with a modified F0-guided segregation algorithm from Meddis and Hewitt [(1992). J. Acoust. Soc. Am. 91, 233-245] to predict concurrent vowel scores. The ONH model included endocochlear-potential loss, while the OHI model also included hair cell damage; however, both models incorporated cochlear synaptopathy, with a larger effect for OHI. Compared with the YNH model, concurrent vowel scores were reduced across F0 differences for ONH and OHI models, with the lowest scores for OHI. These patterns successfully captured the age and hearing loss effects in the concurrent-vowel data. The predictions suggest that the inability to utilize an F0-guided segregation cue, resulting from peripheral changes, may reduce scores for ONH and OHI listeners.
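The F0 cue at the heart of such segregation algorithms can be sketched with a simple autocorrelation-based F0 estimator applied to a periodic, vowel-like waveform. This is an illustrative stand-in, not the Meddis and Hewitt implementation (which operates on simulated auditory-nerve responses channel by channel):

```python
import math

def autocorr_f0(signal, fs, min_f0=80.0, max_f0=400.0):
    """Estimate F0 as the autocorrelation peak within a plausible lag range."""
    n = len(signal)
    best_lag, best_r = None, -float("inf")
    for lag in range(int(fs / max_f0), int(fs / min_f0) + 1):
        r = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return fs / best_lag

fs = 8000.0  # Hz, assumed sampling rate
# Periodic "vowel-like" waveform at 125 Hz: fundamental plus one harmonic.
x = [math.sin(2 * math.pi * 125 * i / fs) + 0.5 * math.sin(2 * math.pi * 250 * i / fs)
     for i in range(800)]

f0 = autocorr_f0(x, fs)
assert abs(f0 - 125.0) < 5.0
```

In an F0-guided segregation scheme, channel-wise periodicity estimates like this are used to assign frequency channels to whichever vowel's F0 dominates them; peripheral changes (e.g., degraded phase locking or synaptopathy) blur those estimates and weaken the cue.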


Subjects
Deafness, Hearing Loss, Speech Perception, Aged, Cochlear Nerve, Hearing, Hearing Loss/diagnosis, Humans
11.
J Acoust Soc Am ; 150(3): 2230, 2021 09.
Article in English | MEDLINE | ID: mdl-34598642

ABSTRACT

A fundamental question in the neuroscience of everyday communication is how scene acoustics shape the neural processing of attended speech sounds and in turn impact speech intelligibility. While it is well known that the temporal envelopes in target speech are important for intelligibility, how the neural encoding of target-speech envelopes is influenced by background sounds or other acoustic features of the scene is unknown. Here, we combine human electroencephalography with simultaneous intelligibility measurements to address this key gap. We find that the neural envelope-domain signal-to-noise ratio in target-speech encoding, which is shaped by masker modulations, predicts intelligibility over a range of strategically chosen realistic listening conditions unseen by the predictive model. This provides neurophysiological evidence for modulation masking. Moreover, using high-resolution vocoding to carefully control peripheral envelopes, we show that target-envelope coding fidelity in the brain depends not only on envelopes conveyed by the cochlea, but also on the temporal fine structure (TFS), which supports scene segregation. Our results are consistent with the notion that temporal coherence of sound elements across envelopes and/or TFS influences scene analysis and attentive selection of a target sound. Our findings also inform speech-intelligibility models and technologies attempting to improve real-world speech communication.


Subjects
Speech Intelligibility, Speech Perception, Acoustic Stimulation, Acoustics, Auditory Perception, Humans, Perceptual Masking, Signal-to-Noise Ratio
12.
J Acoust Soc Am ; 150(4): 2664, 2021 10.
Article in English | MEDLINE | ID: mdl-34717498

ABSTRACT

To understand the mechanisms of speech perception in everyday listening environments, it is important to elucidate the relative contributions of different acoustic cues in transmitting phonetic content. Previous studies suggest that the envelope of speech in different frequency bands conveys most speech content, while the temporal fine structure (TFS) can aid in segregating target speech from background noise. However, the role of TFS in conveying phonetic content beyond what envelopes convey for intact speech in complex acoustic scenes is poorly understood. The present study addressed this question using online psychophysical experiments to measure the identification of consonants in multi-talker babble for intelligibility-matched intact and 64-channel envelope-vocoded stimuli. Consonant confusion patterns revealed that listeners were more biased in the vocoded (versus intact) condition toward reporting that they heard an unvoiced consonant, despite envelope and place cues being largely preserved. This result was replicated when babble instances were varied across independent experiments, suggesting that TFS conveys voicing information beyond what is conveyed by envelopes for intact speech in babble. Given that multi-talker babble is a masker that is ubiquitous in everyday environments, this finding has implications for the design of assistive listening devices such as cochlear implants.


Assuntos
Implantes Cocleares , Percepção da Fala , Estimulação Acústica , Ruído/efeitos adversos , Mascaramento Perceptivo , Fonética , Fala , Inteligibilidade da Fala
13.
PLoS Comput Biol ; 17(2): e1008155, 2021 02.
Article in English | MEDLINE | ID: mdl-33617548

ABSTRACT

Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals. This ability provides opportunities to develop invaluable insight into proper interpretations of evoked responses, which benefits both basic-science studies of neural mechanisms and translational applications, e.g., diagnostic development. However, these comparisons have been limited by a disconnect between the types of spectrotemporal analyses used with single-unit spike trains and evoked responses, which results because these response types are fundamentally different (point-process versus continuous-valued signals) even though the responses themselves are related. Here, we describe a unifying framework to study temporal coding of complex sounds that allows spike-train and evoked-response data to be analyzed and compared using the same advanced signal-processing techniques. The framework uses a set of peristimulus-time histograms computed from single-unit spike trains in response to polarity-alternating stimuli to allow advanced spectral analyses of both slow (envelope) and rapid (temporal fine structure) response components. 
Demonstrated benefits include: (1) novel spectrally specific temporal-coding measures that are less confounded by distortions due to hair-cell transduction, synaptic rectification, and neural stochasticity compared to previous metrics, e.g., the correlogram peak-height, (2) spectrally specific analyses of spike-train modulation coding (magnitude and phase), which can be directly compared to modern perceptually based models of speech intelligibility (e.g., that depend on modulation filter banks), and (3) superior spectral resolution in analyzing the neural representation of nonstationary sounds, such as speech and music. This unifying framework significantly expands the potential of preclinical animal models to advance our understanding of the physiological correlates of perceptual deficits in real-world listening following sensorineural hearing loss.
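The central trick described above, splitting responses to polarity-alternating stimuli into slow (envelope) and rapid (TFS) components, reduces at its simplest to half-sum and half-difference PSTHs, since TFS-locked responses invert with stimulus polarity while envelope-locked responses do not. The spike counts below are made up for illustration:

```python
def env_tfs_psths(psth_pos, psth_neg):
    """Split PSTHs to opposite-polarity stimuli into envelope-dominated
    (half-sum) and TFS-dominated (half-difference) components."""
    env = [(p + n) / 2 for p, n in zip(psth_pos, psth_neg)]
    tfs = [(p - n) / 2 for p, n in zip(psth_pos, psth_neg)]
    return env, tfs

# Hypothetical spike counts per bin for the two stimulus polarities.
psth_pos = [10, 12, 8, 11]
psth_neg = [6, 12, 12, 7]

env, tfs = env_tfs_psths(psth_pos, psth_neg)
assert env == [8.0, 12.0, 10.0, 9.0]   # polarity-invariant part
assert tfs == [2.0, 0.0, -2.0, 2.0]    # polarity-inverting part
```

Because `env` and `tfs` are ordinary time series, the same spectral analyses applied to continuous evoked responses (e.g., FFRs) can then be applied to spike-train data, which is the unification the framework is after.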


Subjects
Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Neurological Models, Acoustic Stimulation, Animals, Chinchilla/physiology, Cochlear Nerve/physiology, Computational Biology, Animal Disease Models, Sensorineural Hearing Loss/physiopathology, Sensorineural Hearing Loss/psychology, Humans, Animal Models, Nonlinear Dynamics, Psychoacoustics, Sound, Spatio-Temporal Analysis, Speech Intelligibility/physiology, Speech Perception/physiology, Translational Biomedical Research
14.
J Assoc Res Otolaryngol ; 22(1): 51-66, 2021 02.
Article in English | MEDLINE | ID: mdl-33188506

ABSTRACT

Animal models of noise-induced hearing loss (NIHL) show a dramatic mismatch between cochlear characteristic frequency (CF, based on place of innervation) and the dominant response frequency in single auditory-nerve-fiber responses to broadband sounds (i.e., distorted tonotopy, DT). This noise trauma effect is associated with decreased frequency-tuning-curve (FTC) tip-to-tail ratio, which results from decreased tip sensitivity and enhanced tail sensitivity. Notably, DT is more severe for noise trauma than for metabolic (e.g., age-related) losses of comparable degree, suggesting that individual differences in DT may contribute to speech intelligibility differences in patients with similar audiograms. Although DT has implications for many neural-coding theories for real-world sounds, it has primarily been explored in single-neuron studies that are not viable with humans. Thus, there are no noninvasive measures to detect DT. Here, frequency following responses (FFRs) to a conversational speech sentence were recorded in anesthetized male chinchillas with either normal hearing or NIHL. Tonotopic sources of FFR envelope and temporal fine structure (TFS) were evaluated in normal-hearing chinchillas. Results suggest that FFR envelope primarily reflects activity from high-frequency neurons, whereas FFR-TFS receives broad tonotopic contributions. Representation of low- and high-frequency speech power in FFRs was also assessed. FFRs in hearing-impaired animals were dominated by low-frequency stimulus power, consistent with oversensitivity of high-frequency neurons to low-frequency power. These results suggest that DT can be diagnosed noninvasively. A normalized DT metric computed from speech FFRs provides a potential diagnostic tool to test for DT in humans. A sensitive noninvasive DT metric could be used to evaluate perceptual consequences of DT and to optimize hearing-aid amplification strategies to improve tonotopic coding for hearing-impaired listeners.


Subjects
Acoustic Stimulation/adverse effects, Cochlear Nerve, Noise-Induced Hearing Loss, Speech Perception, Animals, Chinchilla, Cochlear Nerve/injuries, Humans, Male, Nerve Conduction, Noise, Speech
15.
J Acoust Soc Am ; 146(5): 3710, 2019 11.
Article in English | MEDLINE | ID: mdl-31795699

ABSTRACT

The chinchilla animal model for noise-induced hearing loss has an extensive history spanning more than 50 years. Many behavioral, anatomical, and physiological characteristics of the chinchilla make it a valuable animal model for hearing science. These include similarities with human hearing frequency and intensity sensitivity, the ability to be trained behaviorally with acoustic stimuli relevant to human hearing, a docile nature that allows many physiological measures to be made in an awake state, physiological robustness that allows for data to be collected from all levels of the auditory system, and the ability to model various types of conductive and sensorineural hearing losses that mimic pathologies observed in humans. Given these attributes, chinchillas have been used repeatedly to study anatomical, physiological, and behavioral effects of continuous and impulse noise exposures that produce either temporary or permanent threshold shifts. Based on the mechanistic insights from noise-exposure studies, chinchillas have also been used in pre-clinical drug studies for the prevention and rescue of noise-induced hearing loss. This review paper highlights the role of the chinchilla model in hearing science, its important contributions, and its advantages and limitations.


Subjects
Chinchilla/physiology, Animal Disease Models, Noise-Induced Hearing Loss/physiopathology, Animals, Animal Behavior, Hearing, Noise-Induced Hearing Loss/etiology, Noise-Induced Hearing Loss/pathology, Humans, Species Specificity
16.
J Neurosci ; 39(35): 6879-6887, 2019 08 28.
Article in English | MEDLINE | ID: mdl-31285299

ABSTRACT

Speech intelligibility can vary dramatically between individuals with similar clinically defined severity of hearing loss based on the audiogram. These perceptual differences, despite equal audiometric-threshold elevation, are often assumed to reflect central-processing variations. Here, we compared peripheral processing in auditory nerve (AN) fibers of male chinchillas between two prevalent hearing loss etiologies: metabolic hearing loss (MHL) and noise-induced hearing loss (NIHL). MHL results from age-related reduction of the endocochlear potential due to atrophy of the stria vascularis. MHL in the present study was induced using furosemide, which provides a validated model of age-related MHL in young animals by reversibly inhibiting the endocochlear potential. Effects of MHL on peripheral processing were assessed using Wiener-kernel (system identification) analyses of single AN fiber responses to broadband noise, for direct comparison to previously published AN responses from animals with NIHL. Wiener-kernel analyses show that even mild NIHL causes grossly abnormal coding of low-frequency stimulus components. In contrast, for MHL the same abnormal coding was only observed with moderate to severe loss. For equal sensitivity loss, coding impairment was substantially less severe with MHL than with NIHL, probably due to greater preservation of the tip-to-tail ratio of cochlear frequency tuning with MHL compared with NIHL rather than different intrinsic AN properties. Differences in peripheral neural coding between these two pathologies (the more severe of which, NIHL, is preventable) likely contribute to individual speech perception differences. Our results underscore the need to minimize noise overexposure and for strategies to personalize diagnosis and treatment for individuals with sensorineural hearing loss.

SIGNIFICANCE STATEMENT: Differences in speech perception ability between individuals with similar clinically defined severity of hearing loss are often assumed to reflect central neural-processing differences. Here, we demonstrate for the first time that peripheral neural processing of complex sounds differs dramatically between the two most common etiologies of hearing loss. Greater processing impairment with noise-induced compared with an age-related (metabolic) hearing loss etiology may explain heightened speech perception difficulties in people overexposed to loud environments. These results highlight the need for public policies to prevent noise-induced hearing loss, an entirely avoidable hearing loss etiology, and for personalized strategies to diagnose and treat sensorineural hearing loss.
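First-order Wiener-kernel analysis of noise-driven responses amounts, in its simplest form, to a spike-triggered average (STA) of the stimulus preceding each spike. Here is a toy sketch with a hypothetical threshold-crossing "fiber"; the front-end smoothing filter and threshold are assumptions, not the AN model used in the study:

```python
import random

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(20000)]  # broadband stimulus

# Toy "fiber": spikes when a crudely low-passed version of the noise
# (mean of the last 5 samples) crosses a fixed threshold.
KERNEL_LEN = 30
spikes = []
for t in range(KERNEL_LEN, len(noise)):
    drive = sum(noise[t - j] for j in range(5)) / 5
    if drive > 1.0:
        spikes.append(t)

# First-order Wiener kernel h1: spike-triggered average of preceding noise.
h1 = [0.0] * KERNEL_LEN
for t in spikes:
    for j in range(KERNEL_LEN):
        h1[j] += noise[t - j]
h1 = [v / len(spikes) for v in h1]

# The STA recovers the front-end integration window: the first 5 lags
# (inside the toy filter) stand out against the later, uncorrelated lags.
early = sum(h1[:5]) / 5
late = sum(abs(v) for v in h1[10:]) / len(h1[10:])
assert early > late
```

In the actual analyses, spectral features of kernels like `h1` (e.g., the dominant response frequency relative to the fiber's characteristic frequency) are what reveal the abnormal low-frequency coding after NIHL.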


Subjects
Auditory Perception/physiology; Cochlear Nerve/physiopathology; Hearing Loss, Noise-Induced/physiopathology; Hearing Loss, Sensorineural/physiopathology; Hearing/physiology; Animals; Auditory Threshold; Chinchilla; Disease Models, Animal; Furosemide; Hearing Loss, Sensorineural/chemically induced; Hearing Loss, Sensorineural/etiology; Male
17.
Neuroscience ; 407: 53-66, 2019 05 21.
Article in English | MEDLINE | ID: mdl-30853540

ABSTRACT

Studies in multiple species, including in post-mortem human tissue, have shown that normal aging and/or acoustic overexposure can lead to a significant loss of afferent synapses innervating the cochlea. Hypothetically, this cochlear synaptopathy can lead to perceptual deficits in challenging environments and can contribute to central neural effects such as tinnitus. However, because cochlear synaptopathy can occur without any measurable changes in audiometric thresholds, synaptopathy can remain hidden from standard clinical diagnostics. To understand the perceptual sequelae of synaptopathy and to evaluate the efficacy of emerging therapies, sensitive and specific non-invasive measures at the individual patient level need to be established. Pioneering experiments in specific mouse strains have helped identify many candidate assays. These include auditory brainstem responses, the middle-ear muscle reflex, envelope-following responses, and extended high-frequency audiograms. Unfortunately, because these non-invasive measures can also be affected by extraneous factors other than synaptopathy, their application and interpretation in humans is not straightforward. Here, we systematically examine six extraneous factors through a series of interrelated human experiments aimed at understanding their effects. Using strategies that may help mitigate the effects of such extraneous factors, we then show that these suprathreshold physiological assays exhibit across-individual correlations with each other indicative of contributions from a common physiological source consistent with cochlear synaptopathy. Finally, we discuss the application of these assays to two key outstanding questions, and discuss some barriers that still remain. This article is part of a Special Issue entitled: Hearing Loss, Tinnitus, Hyperacusis, Central Gain.
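Of the candidate assays listed in this abstract, the envelope-following response (EFR) lends itself to a compact illustration: its strength is conventionally quantified as the spectral magnitude (or SNR) at the stimulus modulation frequency. The sketch below uses a synthetic "recording"; the signal amplitude, noise level, and 100 Hz modulation frequency are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, fm = 10_000, 100.0           # sampling rate and modulation frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)    # 1 s epoch -> 1 Hz frequency resolution

# Synthetic "scalp recording": a response phase-locked to the 100 Hz envelope,
# buried in much larger background noise.
eeg = 0.5 * np.sin(2 * np.pi * fm * t) + rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - fm))            # FFT bin at the modulation frequency

efr = spec[k]                                # EFR magnitude at fm
floor = spec[k - 12:k - 2].mean()            # noise floor from neighboring bins
snr_db = 20 * np.log10(efr / floor)
print(f"EFR SNR at {fm:.0f} Hz: {snr_db:.1f} dB")
```

Real EFR pipelines average many epochs and often examine harmonics of fm as well, but the bin-versus-noise-floor comparison shown here is the basic quantity such assays report.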


Subjects
Auditory Threshold/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Individuality; Tinnitus/etiology; Cochlea/physiology; Hearing/physiology; Hearing Loss, Noise-Induced/complications; Humans; Synapses/physiology; Tinnitus/physiopathology
18.
Hear Res ; 377: 109-121, 2019 06.
Article in English | MEDLINE | ID: mdl-30927686

ABSTRACT

The relative importance of neural temporal and place coding in auditory perception is still a matter of much debate. The current article is a compilation of viewpoints from leading auditory psychophysicists and physiologists regarding the upper frequency limit for the use of neural phase locking to code temporal fine structure in humans. While phase locking is used for binaural processing up to about 1500 Hz, there is disagreement regarding the use of monaural phase-locking information at higher frequencies. Estimates of the general upper limit proposed by the contributors range from 1500 to 10,000 Hz. The arguments depend on whether or not phase locking is needed to explain psychophysical discrimination performance at frequencies above 1500 Hz, and whether or not the phase-locked neural representation is sufficiently robust at these frequencies to provide usable information. The contributors suggest key experiments that may help to resolve this issue, and experimental findings that may cause them to change their minds. This issue is of crucial importance to our understanding of the neural basis of auditory perception in general, and of pitch perception in particular.
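Phase locking of the kind debated here is conventionally quantified by vector strength (Goldberg and Brown): each spike is mapped to its phase within the stimulus cycle and the spikes are averaged as unit vectors, giving 1 for perfect locking and values near 0 for none. A minimal sketch with synthetic spike trains (frequency, spike counts, and jitter are illustrative assumptions):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Goldberg-Brown vector strength: 1 = perfect phase locking, ~0 = none."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(2)
f = 1000.0                                    # tone frequency (Hz)
n = 2000                                      # number of spikes

# Phase-locked spikes: one preferred phase per cycle, with small jitter.
locked = rng.integers(0, 1000, n) / f + rng.normal(0, 0.05 / f, n)
# Unlocked spikes: uniformly distributed in time.
unlocked = rng.uniform(0, 1.0, n)

print(f"locked:   {vector_strength(locked, f):.2f}")
print(f"unlocked: {vector_strength(unlocked, f):.2f}")
```

The debate summarized above concerns how rapidly measures like this decline with frequency in real AN fibers, and whether the residual locking above about 1500 Hz is strong enough for the brain to use.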


Subjects
Cochlear Nerve/physiology; Cues; Pitch Perception; Time Perception; Acoustic Stimulation; Humans; Motion; Periodicity; Pressure; Psychoacoustics; Sound
19.
Acta Acust United Acust ; 104(5): 922-925, 2018.
Article in English | MEDLINE | ID: mdl-30369861

ABSTRACT

When presented with two vowels simultaneously, humans are often able to identify the constituent vowels. Computational models exist that simulate this ability; however, they predict listener confusions poorly, particularly when the two vowels have the same fundamental frequency. Presented here is a model that is uniquely able to predict the combined representation of concurrent vowels. The model predicts listeners' systematic perceptual decisions with a high degree of accuracy.

20.
J Acoust Soc Am ; 143(3): 1287, 2018 03.
Article in English | MEDLINE | ID: mdl-29604696

ABSTRACT

Sensitivity to interaural time differences (ITDs) in the envelope and temporal fine structure (TFS) of amplitude-modulated (AM) tones was assessed for young and older subjects, all with clinically normal hearing at the carrier frequencies of 250 and 500 Hz. Some subjects had hearing loss at higher frequencies. In experiment 1, thresholds for detecting changes in ITD were measured when the ITD was present in the TFS alone (ITD-TFS), the envelope alone (ITD-ENV), or both (ITD-TFS/ENV). Thresholds tended to be higher for the older than for the young subjects. ITD-ENV thresholds were much higher than ITD-TFS thresholds, while ITD-TFS/ENV thresholds were similar to ITD-TFS thresholds. ITD-TFS thresholds were lower than ITD thresholds obtained with an unmodulated pure tone, indicating that uninformative AM can improve ITD-TFS discrimination. In experiment 2, equally detectable values of ITD-TFS and ITD-ENV were combined so as to give consistent or inconsistent lateralization. There were large individual differences, but several subjects gave scores that were much higher than would be expected from the optimal combination of independent sources of information, even for the inconsistent condition. It is suggested that ITD-TFS and ITD-ENV cues are processed partly independently, but that both cues influence lateralization judgments, even when one cue is uninformative.
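The stimulus manipulation in experiment 1 can be illustrated by applying separate delays to the envelope and the fine structure of an AM tone. A minimal sketch follows; the carrier, modulation rate, and 100 µs ITD are illustrative values, not necessarily those used in the study.

```python
import numpy as np

fs = 48_000                       # sampling rate (Hz)
fc, fm = 500.0, 20.0              # carrier and modulation frequencies (Hz)
t = np.arange(0, 0.5, 1 / fs)

def am_tone(itd_tfs=0.0, itd_env=0.0):
    """AM tone with independent delays (in s) on fine structure and envelope."""
    env = 1.0 + np.sin(2 * np.pi * fm * (t - itd_env))
    tfs = np.sin(2 * np.pi * fc * (t - itd_tfs))
    return env * tfs

# ITD in the fine structure only: the envelopes at the two ears stay aligned
# while the 500 Hz carrier is delayed by 100 microseconds in one ear.
left = am_tone()
right = am_tone(itd_tfs=100e-6)
```

Setting `itd_env` instead of `itd_tfs` yields the ITD-ENV condition, and setting both yields the combined ITD-TFS/ENV condition described above.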


Subjects
Auditory Perception; Acoustic Stimulation; Adult; Age Factors; Aged; Auditory Threshold; Cochlea/physiology; Hearing Loss/physiopathology; Humans; Middle Aged; Time Perception; Young Adult